

Search results: all records where Creators/Authors contains "Karahalios, Karrie"


  1. Errors in AI grading and feedback are by nature non-deterministic and difficult to avoid completely. Since inaccurate feedback can harm learning, designs and workflows are needed to mitigate these harms. To better understand the mechanisms by which erroneous AI feedback affects students' learning, we conducted surveys and interviews that recorded students' interactions with a short-answer AI autograder for "Explain in Plain English" code reading problems. Using causal modeling, we inferred the learning impacts of wrong answers marked as right (false positives, FPs) and right answers marked as wrong (false negatives, FNs). We further explored explanations for these learning impacts, including how errors influenced participants' engagement with feedback and their assessments of their answers' correctness, as well as participants' prior performance in the class. FPs harmed learning largely because participants failed to detect the errors: participants paid little attention to feedback after being marked right, and showed an apparent bias against admitting an answer was wrong once it had been marked right. FNs, on the other hand, harmed learning only for survey participants, suggesting that interviewees' greater behavioral and cognitive engagement protected them from learning harms. Based on these findings, we propose ways to help learners detect FPs and to encourage deeper reflection on FNs in order to mitigate the learning harms of AI errors.
    Free, publicly-accessible full text available August 7, 2024
  2. With the increasing adoption of smart home devices, users rely on device automation to control their homes. This automation commonly comes in the form of smart home routines, an abstraction available from major vendors. Yet questions remain about how a system should best handle conflicts in which different routines access the same devices simultaneously. In particular, among the myriad ways a smart home system could handle conflicts, which are currently used by existing systems, and which result in the highest user satisfaction? We investigate the first question via a survey of existing literature and find a set of conditions, modifications, and system strategies related to handling conflicts. We answer the second question via a scenario-based Mechanical Turk survey of prospective and current smart home device owners (N=197). We find that: (i) there is no context-agnostic strategy that always results in high user satisfaction, and (ii) users' personal values frequently form the basis for their expectations of how routines should execute.
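    To make the conflict setting in the abstract above concrete, here is a minimal, hypothetical Python sketch of the condition it describes: two routines acting on the same device during overlapping execution windows. The Routine and Action structures, field names, device names, and time representation are illustrative assumptions, not the vendor abstractions or the system studied in the paper.

        from dataclasses import dataclass
        from typing import List, Tuple

        @dataclass
        class Action:
            device: str        # e.g. "living_room_light" (hypothetical device name)
            command: str       # e.g. "on", "off", "dim_20"

        @dataclass
        class Routine:
            name: str
            window: Tuple[int, int]    # execution window, minutes since midnight
            actions: List[Action]

        def windows_overlap(a: Tuple[int, int], b: Tuple[int, int]) -> bool:
            return a[0] < b[1] and b[0] < a[1]

        def find_conflicts(routines: List[Routine]) -> List[Tuple[str, str, str]]:
            # Report (routine, routine, device) triples where two routines touch
            # the same device during overlapping windows: the conflict condition
            # described in the abstract above.
            conflicts = []
            for i, r1 in enumerate(routines):
                for r2 in routines[i + 1:]:
                    if not windows_overlap(r1.window, r2.window):
                        continue
                    shared = {a.device for a in r1.actions} & {a.device for a in r2.actions}
                    conflicts.extend((r1.name, r2.name, d) for d in sorted(shared))
            return conflicts

        movie_night = Routine("movie_night", (1200, 1320),
                              [Action("living_room_light", "dim_20"), Action("tv", "on")])
        bedtime = Routine("bedtime", (1260, 1290),
                          [Action("living_room_light", "off")])
        print(find_conflicts([movie_night, bedtime]))
        # [('movie_night', 'bedtime', 'living_room_light')]

    A real system would still have to decide what to do once such a conflict is detected; the strategies for that decision, and how satisfying users find them, are the subject of the paper.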
  3. The configuration that an instructor enters into an algorithmic team formation tool determines how students are grouped into teams, impacting their learning experiences. One way to decide the configuration is to solicit input from the students. Prior work has investigated the criteria students prefer for team formation, but has not studied how students prioritize the criteria or to what degree students agree with each other. This paper describes a workflow for gathering student preferences for how to weight the criteria entered into a team formation tool, and presents the results of a study in which the workflow was implemented in four semesters of the same project-based design course. In the most recent semester, the workflow was supplemented with an online peer discussion to learn about students' rationale for their selections. Our results show that students most want to be grouped with other students who share their level of course commitment and have compatible schedules. Students prioritize demographic attributes next, and then task skills, such as the programming needed for the project work. We found these outcomes to be consistent in each instance of the course. Instructors can use our results to guide team formation in their own project-based design courses and can replicate our workflow to gather student preferences for team formation in any course.
  5. Team formation tools assume instructors should configure the criteria for creating teams, precluding students from participating in a process affecting their learning experience. We propose LIFT, a novel learner-centered workflow where students propose, vote for, and weigh the criteria used as inputs to the team formation algorithm. We conducted an experiment (N=289) comparing LIFT to the usual instructor-led process, and interviewed participants to evaluate their perceptions of LIFT and its outcomes. Learners proposed novel criteria not included in existing algorithmic tools, such as organizational style. They avoided criteria like gender and GPA that instructors frequently select, and preferred those promoting efficient collaboration. LIFT led to team outcomes comparable to those achieved by the instructor-led approach, and teams valued having control of the team formation process. We provide instructors and designers with a workflow and evidence supporting giving learners control of the algorithmic process used for grouping them into teams. 
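    Both team formation abstracts above (items 3 and 5) center on weighting criteria that feed an algorithmic grouping tool. The following is a minimal, hypothetical Python sketch of one way such weights could combine per-criterion scores into a single team score; the criteria names, scales, scoring functions, and weighting scheme are illustrative assumptions, not the actual tool or algorithm used in these studies.

        from typing import Callable, Dict, List

        Student = Dict[str, object]   # e.g. {"schedule": {"Mon", "Wed"}, "commitment": 4}

        def schedule_overlap(team: List[Student]) -> float:
            # Fraction of listed meeting slots that every member shares (0 to 1).
            slots = [set(s["schedule"]) for s in team]
            common = set.intersection(*slots)
            union = set.union(*slots)
            return len(common) / len(union) if union else 0.0

        def commitment_similarity(team: List[Student]) -> float:
            # 1 minus the normalized spread in self-reported commitment (1-5 scale).
            levels = [s["commitment"] for s in team]
            return 1.0 - (max(levels) - min(levels)) / 4.0

        CRITERIA: Dict[str, Callable[[List[Student]], float]] = {
            "schedule": schedule_overlap,
            "commitment": commitment_similarity,
        }

        def team_score(team: List[Student], weights: Dict[str, float]) -> float:
            # Weighted average of per-criterion scores; the weights are where
            # student preferences (proposals and votes) would enter the algorithm.
            total = sum(weights.values())
            return sum(weights[c] * fn(team) for c, fn in CRITERIA.items()) / total

        alice = {"schedule": {"Mon", "Wed", "Fri"}, "commitment": 4}
        bob = {"schedule": {"Mon", "Wed"}, "commitment": 5}
        # Suppose students voted to weight schedule twice as heavily as commitment.
        print(round(team_score([alice, bob], {"schedule": 2.0, "commitment": 1.0}), 3))
        # 0.694

    In an instructor-led workflow the weights would come from the instructor's configuration; in a learner-centered workflow such as LIFT, they would be derived from the criteria students propose, vote for, and weigh.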